
SIMD-110: Exponential fee for write lock accounts #110

Open

wants to merge 7 commits into base: main
Conversation

tao-stones
Contributor

@tao-stones tao-stones commented Jan 19, 2024

To introduce economic back pressure that makes spammers back off.

- Identify write-locked accounts with *compute-unit utilization* > half of
account max CU limit. Add/update bank's account_write_lock_fee_cache.
- Adding new account into LRU cache could push out eldest account;
- LRU cache has capacity of 1024, which should be large enough for hot accounts


Is LRU the same as evicting the lowest-cost accounts first?


### Other Considerations

- Users may need new instruction to set a maximum write-lock fee for transaction


In v0, a 1% increase per block means the worst case is a 4.4x increase over the 150 slots a blockhash remains valid. That should be good enough for wallets to show users as an estimate.
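The 4.4x figure comes from compounding the 1% per-slot increase over the 150 slots a blockhash stays valid; a quick sketch (constants are from this thread, not from the SIMD text):

```python
# Hypothetical sketch: 1% multiplicative fee growth per slot, compounded over
# the 150 slots a blockhash remains valid. Constants are from this discussion.
GROWTH_PER_SLOT = 1.01
BLOCKHASH_VALIDITY_SLOTS = 150

def worst_case_multiplier(slots: int = BLOCKHASH_VALIDITY_SLOTS) -> float:
    """Fee multiplier if an account stays saturated for every one of `slots` slots."""
    return GROWTH_PER_SLOT ** slots

print(round(worst_case_multiplier(), 2))  # 4.45, i.e. the ~4.4x worst case above
```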


how would a maximum write-lock fee instruction work? at what point does the tx get rejected?


I would imagine it would be a max fee as part of the budget program.

I would rather put this complexity outside of the runtime. The runtime needs a cheap, fast priority fee; the non-priority fees can be estimated by wallets off chain.

- End of Block Processing:
- Identify write-locked accounts with *compute-unit utilization* > half of
account max CU limit. Add/update bank's account_write_lock_fee_cache.
- Adding new account into LRU cache could push out eldest account;
Contributor

How is eldest determined? If we added 2 accounts in slot N, the oldest slot in the cache, and we now need to evict 1, how do we determine which one?
Potentially could also look at cost in block N, but what if they had the same cost?


I don't think it should be eldest. It should be cheapest account gets evicted.


should be cheapest gets evicted with a (much) larger cache size i believe; otherwise you can end up in a world where an account that starts at 0 fee (or whatever the min fee is) never gets off the ground because it never makes it into the cache


New account > 6m CU in usage evicts the cheapest one from the cache at the end of the block.
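A minimal sketch of the cheapest-first eviction being discussed here; the capacity, threshold, and initial fee "K" are placeholders taken from this thread, and `end_of_block_update` is a hypothetical name:

```python
# Sketch of the "evict the cheapest" policy proposed in this thread (not the
# SIMD text). At end of block, accounts whose usage exceeded the threshold are
# inserted with an initial fee K; if the cache is full, the lowest-fee entry
# goes. The capacity and K values are placeholders.
CU_THRESHOLD = 6_000_000  # half of the 12m per-account write-lock CU limit
INITIAL_FEE = 1_000       # "K" in the discussion; placeholder value

def end_of_block_update(cache: dict, block_usage: dict, capacity: int = 2048) -> None:
    """cache: account -> fee; block_usage: account -> CUs used this block."""
    for account, used in block_usage.items():
        if used <= CU_THRESHOLD or account in cache:
            continue
        if len(cache) >= capacity:
            cheapest = min(cache, key=cache.get)  # ties broken arbitrarily here;
            del cache[cheapest]                   # a real spec must be deterministic
        cache[account] = INITIAL_FEE
```

As the follow-up comments point out, the tie-breaking rule would need to be pinned down before a policy like this could go into consensus.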

Contributor

Okay, cheapest not eldest, I think that makes sense. But that doesn't address that we need to propose a way to resolve ties - otherwise clients could implement that logic differently and we end up with consensus failures.

If 2 accounts have the same cost, and were most recently accessed in same block, how do we resolve that tie in a deterministic way? also possible we just evict all tied accounts.


Would implementing a (small) global CU base fee help resolve this issue? The local per-account fees would then start at this base fee and increase/decrease from there. This type of system would avoid the step change when an account is added/removed from the cache.

account max CU limit. Add/update bank's account_write_lock_fee_cache.
- Adding new account into LRU cache could push out eldest account;
- LRU cache has capacity of 1024, which should be large enough for hot accounts
in 150 slots.
Contributor

What is the relevance of 150 slots here?


it's the # of slots until blockhash expiry, but also not sure why it's relevant here


Attacker can saturate 128 * 8 accounts per block


Max accounts in a tx is 128. Max txs with the accounts hitting the 6m CU limit is 8


@taozhu-chicago

i don't think a 150-slot EMA makes sense. Applying 1.01x or 1/1.01x every block, depending on whether the account is > 6m or < 6m, will track the average already.

Then the cache eviction policy is simple.

Cache size = 2x * worst case number of accounts > 6m per block

Eviction = any new accounts > 6m CUs at the end of the block are added to the cache with a write-lock fee of K; the cheapest accounts are evicted.

Fee updates = if account is in the cache and > 6m fee is 1.01x, otherwise fee is 1/1.01x

With this setup an account that is saturated 100 / 150 blocks will have something like 1.01^100 * 1.01^-50 or 1.01^50 which is what we want.
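The arithmetic above can be checked directly; this sketch assumes nothing beyond the 1.01x-up / 1/1.01x-down rule stated in the comment:

```python
# Checks the update rule sketched above: the fee multiplier moves up 1.01x in
# blocks where the account exceeds the threshold and down 1/1.01x otherwise,
# so no separate EMA is needed.
STEP = 1.01

def fee_after(blocks_saturated: int, blocks_idle: int, start: float = 1.0) -> float:
    return start * STEP ** blocks_saturated / STEP ** blocks_idle

# Saturated 100 of 150 blocks: 1.01^100 * 1.01^-50 == 1.01^50, as stated above.
assert abs(fee_after(100, 50) - STEP ** 50) < 1e-9
print(round(fee_after(100, 50), 4))  # 1.6446
```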

Contributor

is the worry a malicious leader doing an attack to evict cache? if that's the concern, then we should be using a cache-size of 4096 since we allocate leader-slots in chunks of 4.


@apfitzge second set would evict the first, not the most expensive. 2048 is all we need

@eugene-chen eugene-chen left a comment

💯

- Pricing Algorithm:
- Adjusts write-lock *cost rate* based on an account's EMA *compute-unit
utilization*. Initial write-lock cost rate is `1000 lamport/CU`.
- For each block, if an account's EMA *compute-unit utilization* is more than


why a discrete change at 50%? why not increase_pct = f(utilization) where f(0%) = -max, f(100%) = +max (perhaps max = 1%) with a continuous function (e.g. linear)?


and if it's gonna be discrete, why not e.g. 33%?


the shape of the controller deserves some study. understood there is a desire for tx senders to know what the max they might pay is. price as an increasing function of utilization (which all these proposals provide) is already much better than status quo, but a significantly more aggressive increase would help with discrete opportunities like NFT mints and one-time arbs


Yea. I agree. I think it's worth picking something reasonable and then adjusting it in the future. 1% per block increase puts the worst case on a tx to 4.4x


Discrete makes it easier to manage a cache. Accounts that cross 6m evict the cheapest from the cache.


Related to @eugene-chen's comment: the choice of 50% implies some target ("utilization <= 50% is fine; over 50% is undesirable"). Since the EMA is tracked anyway, could you have inc_pct = f(utilization) be something like constant * (utilization - target) ?

Since the increase/decrease is in percentage, implicitly the calculation is being done in "logarithmic space" so f(utilization) = exp(constant * (utilization - target) ) is more appropriate. (We have some work that suggests that the current price should probably appear in the exponent as well: see appendix c of this paper.)
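A sketch of the exponential controller suggested here, with illustrative `k` and `target` values that are not part of the proposal:

```python
# Sketch of the continuous controller suggested here: per-block multiplier as
# an exponential of (utilization - target). `k` and `target` are illustrative.
import math

def block_multiplier(utilization: float, target: float = 0.5, k: float = 0.02) -> float:
    """Factor applied to the write-lock cost rate after one block.
    utilization in [0, 1]; at the target the rate is unchanged."""
    return math.exp(k * (utilization - target))

print(block_multiplier(0.5))  # 1.0: no change exactly at the target
```

One property of the exponential form: a block at full utilization followed by a block at zero utilization cancels out exactly, since the updates multiply.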


discrete is a better devX for clients to figure out pricing

current bank is frozen.
- Provides the current *cost rate* when queried.
- EMA of Compute-Unit Utilization:
- Uses 150 slots for EMA calculation.
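The quoted lines do not define the EMA further; a conventional formulation, assuming a smoothing factor of 2/(N+1), would be:

```python
# Hedged sketch of a 150-slot EMA of per-block compute-unit utilization. The
# excerpt does not define the smoothing factor; 2/(N+1) is the usual choice.
N = 150
ALPHA = 2 / (N + 1)

def ema_update(prev_ema: float, block_cu: float, alpha: float = ALPHA) -> float:
    """One EMA step: blend this block's CU usage into the running average."""
    return alpha * block_cu + (1 - alpha) * prev_ema
```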


assuming this constant is roughly a guess; what's the reasoning to make this the same as transaction expiry time?


I don't think ema makes sense here. Since the fee change is 1.01x per block it will average out already. No need to track an ema.

- Calculate write-lock fee for each account a transaction needs to write,
summing up to be its *write lock fee*. This, along with signature fee and
priority fee, constitutes the total fee for the transaction.
- Leader checks fee payer's balance before scheduling the transaction.
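The fee composition in the quoted lines can be sketched as follows; the function and field names are illustrative, not from the SIMD:

```python
# Illustrative breakdown of the proposed total fee: per-account write-lock
# fees plus the existing signature and priority fees. Names are placeholders.
def total_fee(write_lock_rates: dict, cu_requested: int,
              signature_fee: int, priority_fee: int) -> int:
    """write_lock_rates: writable account -> lamports/CU from the pricer cache."""
    write_lock_fee = sum(rate * cu_requested for rate in write_lock_rates.values())
    return write_lock_fee + signature_fee + priority_fee
```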


prio fee today and this proposed fee are priced per CU requested. if desired, could also price per CU used, by charging cost rate * CU requested at the start of the tx and rebating cost rate * (CU requested - CU used) at the end
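A sketch of the charge-then-rebate variant floated in this comment (names are hypothetical; the replies below argue against rebates):

```python
# Sketch of the "price per CU used" variant: charge rate * cu_requested up
# front, rebate rate * (cu_requested - cu_used) at the end. Names hypothetical.
def upfront_charge(rate: int, cu_requested: int) -> int:
    return rate * cu_requested

def rebate(rate: int, cu_requested: int, cu_used: int) -> int:
    return rate * (cu_requested - cu_used)

def net_fee(rate: int, cu_requested: int, cu_used: int) -> int:
    # Net cost ends up proportional to CUs actually used.
    return upfront_charge(rate, cu_requested) - rebate(rate, cu_requested, cu_used)
```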


No rebates please. We need devs to correctly estimate what they are using, not request max.

@siong1987 siong1987 Jan 22, 2024

exactly, rebates will just make all transactions set the CU limit to max.

Comment on lines +65 to +67
- Accounts are associated with a *compute unit pricer*, and the *runtime*
maintains an LRU cache of actively contentious accounts' public keys and
their *compute unit pricers*.


very nice way to avoid adding new field to AccountInfo!

- Acknowledge read lock contention, deferring EMA fee implementation for read locks.
- In the future, a percentage of collected write-lock-fee could be deposited
to an account, allowing dApps to refund cranks and other service providers.
This decision should be done via a governance vote.


why governance? governance by whom? what changes belong to governance and what changes belong to e.g. SIMD process?


Stake weighed median signaled by validators. Would need a simd in the future


ok, I will argue in that future SIMD instead :)


i think some program account deposit is necessary to prevent write lock spamming of oracles and things like that. But that can be turned on in v2

Comment on lines +68 to +69
- Alternatively, each account can have its *compute unit pricer* stored
onchain, which would require modifying accounts.


....but if there are spare bytes, this can be done cheaply (and you can avoid the cache and have a price on literally every account) with two u64s: store last_slot_touched and an int representing log(fee rate) / log(1.01)
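A sketch of the two-u64 idea, assuming the fee decays one 1.01x step per elapsed slot; the field names and decay rule here are inferred from this thread, not specified anywhere:

```python
# Sketch of the two-u64 idea: persist (last_slot_touched, fee_exponent) on the
# account, where the rate is BASE_RATE * 1.01**fee_exponent and the exponent
# decays by one per elapsed slot. Field names and the decay rule are inferred
# from this thread, not from the SIMD.
STEP = 1.01
BASE_RATE = 1_000  # lamports per CU; the proposal's initial cost rate

def current_rate(last_slot_touched: int, fee_exponent: int, now_slot: int) -> float:
    decayed = max(0, fee_exponent - (now_slot - last_slot_touched))
    return BASE_RATE * STEP ** decayed
```

This avoids a cache entirely: every account carries its own pricer, at the cost of two u64s of state per account forever, which is the trade-off debated just below.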


That would require storage forever in the state. An LRU cache should be sufficient.


only if it is big enough!


I think it only needs to be 2*worst case evictions per block

to an account, allowing dApps to refund cranks and other service providers.
This decision should be done via a governance vote.

## Impact


one addition: if fee is paid per CU requested, this additionally incentivizes accurate CU estimation beyond what the prio fee already does


another addition (less obvious if good or bad): this increases the incentive for app developers to fit more of app state into a single account rather than having a bunch of accounts

and incentivizes some type of account-sybiling, e.g. the "canonical account for some app state" rotates every N slots to allow the fee to cool down


Won't it have the opposite effect? If that account is saturated then fees increase super-linearly.


(assuming this is referring to the first note in the second comment) Today a bunch of programs require always hitting accounts A B and C so the user would have to pay for 3 account writes whose price escalate together. So there's an incentive for the developer to compress the state to a single account A to decrease write-lock fee by 2/3


@taozhu-chicago might make sense to scale it by bytes. So there is little advantage to combining accounts.

So each account starts at a lamports per CU per byte rate (LCBR) that scales 1.01x each block.

One way to fit this into existing fee model is to

  • lower the existing signature fee by 50%
  • have validators earn 100% of the signature fee
  • set the floor LCBR to match what votes currently use, so that whatever a vote writes costs 50% of the existing signature fee. Burn 100% of it

So basically math works out to be the same for votes.
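A rough numeric sketch of that restructuring. The 5000-lamport base fee per signature is the current mainnet value; the vote's write-lock CU footprint is left as a parameter since it isn't specified here:

```python
# Rough sketch of the restructuring above. The 5000-lamport base fee per
# signature is the current value; the vote's write-lock CU footprint is a
# free parameter here because it isn't pinned down in this thread.
CURRENT_SIG_FEE = 5_000  # lamports per signature today

def floor_rate_for_votes(vote_write_cus: int) -> float:
    """Floor lamports-per-CU chosen so a vote's (burned) write-lock fee equals
    the 50% removed from the signature fee."""
    return (CURRENT_SIG_FEE / 2) / vote_write_cus

def vote_total_fee(vote_write_cus: int) -> float:
    lowered_sig_fee = CURRENT_SIG_FEE / 2  # validator keeps 100% of this half
    write_lock_fee = floor_rate_for_votes(vote_write_cus) * vote_write_cus  # burned
    return lowered_sig_fee + write_lock_fee  # comes back to 5000 for votes
```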

Contributor Author

Account bytes are converted into CUs by the cost model, so I feel it's simpler to stay with "lamports_per_cu". As for votes: a vote has a constant CU cost, so it can also have a constant fee, assuming it is a "simple vote transaction" and all "complex vote transactions" are dropped.


complex votes can pay the complex fee :), they can be dropped by leaders, but i don't think we need to make them invalid in the runtime for this change.

- Calculate write-lock fee for each account a transaction needs to write,
summing up to be its *write lock fee*. This, along with signature fee and
priority fee, constitutes the total fee for the transaction.
- Leader checks fee payer's balance before scheduling the transaction.
Contributor

This isn't necessary for the proposal, even if it is how the labs client will do it; scheduling is outside consensus and doesn't affect the fee.


Leaders should drop invalid fee payers as early as possible though.

Contributor

Yeah definitely, but that's not related to this proposal.

Contributor Author

It is assumed the leader will do the fee-payer check upfront, but if it doesn't (for whatever reason), an invalid fee payer will be dropped during loading (after locking). That makes it less effective, but still an improvement.

@y2kappa

y2kappa commented Jan 20, 2024

I strongly disagree with this proposal. It seems like an engineer's solution to surge pricing via an arbitrary mathematical formula instead of letting the market do its thing. We already have dynamic pricing via priority fees. That is all you should need. The market is doing what it is incentivised to do, so why are priority fees not enough? It seems like the implementation of the scheduler or sorting of priority fees are wrong.

A few questions/observations:

  • This will probably not work, so will you then proceed to do surge pricing on top of surge pricing? My gut feeling in absence of a rigorous backtesting of this proposal is that it will only slow it down.
  • The solution looks arbitrary and somehow trying to be a low compute cost solution to price discovery for resources. This is very akin to price fixing. Seems just wrong. Just as wrong as pricing based on number of signatures. It won't do much. You just raise the floor where spam happens, but it will still happen.
  • Has this been backtested / simulated on top of real-world txns, not some toy example? I have a strong suspicion that this will significantly spill over onto other genuine users and have unintended consequences. I am actually really shocked that this proposal is approved without actual economic analysis. This is an economic problem, not an engineering problem that you can simply unit test.
  • Can it easily be rolled back if it proves to be wrong/broken? What is the process for that? I have a feeling that it will just break the structure of the market, not fix it, and we will be stuck with it permanently.

Imo we are treating the symptom instead of the disease and it requires much more rigorous analysis. I understand that the time to market of this solution is quicker and we have real issues right now in production, but this seems like the wrong solution.

@brianlong

brianlong commented Jan 21, 2024

Like physical real estate, block space will go to the highest & best use as determined by the highest bidder. We rely on free markets for price discovery. This proposal introduces an artificial pricing mechanism and will only distort the free market.

Priority Fees (PF), our current method of free market price discovery, are starting to work -- we see applications adapting with dynamic PFs, and important TXs are landing. We'll do better to improve the effectiveness of the current market-based system. Distorting the free market is a step backward.

The motivation of this proposal is mis-guided -- the outcome of processing a TX is irrelevant when the sender pays a (higher) fee. Beauty is in the eye of the beholder, and transaction success is in the eye of the bidder. What you see as a failed defi transaction was a success for the trader because they didn't lose money. Traders are willing to pay TX fees to avoid losses -- that's a feature, not a bug! #OPOS

The current cNFT scams are much more troubling. With account compression, we made it incredibly cheap for scammers to send fake cNFTs trying to steal money. For example, someone is sending scam NFTs to Block Logic stakeholders and attempting to steal stake accounts. I chatted with one of the victims, and he is pissed at me and Solana for his losses. He may never come back, and I don't blame him! The defi bots will do us a favor if they outbid the scammers.

We are in the business of processing transactions for a fee. Let's avoid judgment calls about which use cases are good or bad -- let the market decide what's highest & best. I'll take defi traffic over cNFT scams any day.

The original vision for market-based PFs was correct. We should persevere to improve the current PF system. Pivoting here is the wrong move.

@crispheaney crispheaney left a comment

Is there any way to estimate how this will affect DeFi apps where users desire to lock an account every block? E.g. clobs, oracles

While the priority fee serves to mitigate low-cost spams by decreasing the
likelihood of less prioritized transactions being included, it cannot entirely
eliminate the inclusion of spam transactions in a block. As long as there
remains a chance, no matter how small, to inexpensively include transactions,


This is a pretty big assumption. I'd like to see the tx scheduler improvements land in 1.18 before this gets seriously considered. Given how the tx scheduler is currently implemented, it's hard to say priority fees aren't sufficient to adjudicate access to blockspace.

Contributor Author

Am confident the improved scheduler in 1.18 will respect priority fees much better, but there is still a chance, though it might be very small, that a lower-prio tx lands before a higher one.

Contributor

If the probability of low-prio txs getting included is low, then it seems like the EMA raising fees will mainly affect the non-spam devs/users, who will now see raised fees. If non-spam is anywhere close to 6M CUs/block, then non-spam users pay, and low-priority spammers will only get hit by high fees very rarely?

Contributor Author

Users will pay more when capacity is reduced; once this is reliably predictable, spammers will have to back off, and normal users pay for resources per demand.


@crispheaney steady-state ingress load on leaders is 50k pps+. The pipeline to dedup/sigverify/fee check has to handle all that load before it gets to the scheduler. If it takes more than 400ms, the tx isn't getting prioritized in that block. The only way to get spammers to send fewer txs is to raise the base-layer fee.


So if txs take too long to get through the pipeline, prioritization is not effective. Does that make sense? It doesn't matter what the scheduler does if it can't see txs because they are still in the queues. We need to force senders to stop economically.

Contributor

It seems reasonable we focus on improving the throughput scheduling and pre-scheduling stages so that it's more likely txs are able to be prioritized.

If the chance of inclusion for spam is low, then I'm not convinced they will back off.
The fee payer can be totally separate from the account funding the economic activity for arbitrage.
I can keep my fee-payer balance low and continue to spam. If the fees get too high for my account to fund, then oh well my tx won't make it into the block because the leader won't include me since I can't pay fees.

@eugene-chen

to everyone saying "real priority fees have never been tried", can you link to a spec for what the behavior is supposed to be and how that improves the UX? pardon my ignorance but i don't think i've ever seen a description of what the intended behavior of this system is

even a perfect first-price gas auction (which is not achievable with continuous block building) is worse than a protocol-enforced min fee, in terms of UX

@crispheaney

The SIMD talks about deterring spam, not improving UX. Perhaps the SIMD can discuss UX improvements as well if that is a strong reason to adopt this.

@kevinheavey

This needs to offer some arguments for why it's bad for a high % of transactions to fail. Or identify a more specific problem than the failure rate across all transactions

@SOELTH

SOELTH commented Jan 22, 2024

Bringing such a pricing scheme into the protocol, imo, is a colossal mistake for two main reasons:

  • Spam --the issue this proposal claims to solve-- is a result of Jitter in QUIC server and Banking Stage, and can be mostly mitigated simply by reducing jitter.

Jitter in banking stage and the QUIC server is ridiculous, so much so that the current priority fee system has yet to be properly tried. I'm hesitant to rule our current scheme insufficient so hastily.

  • Bringing this scheme in-protocol encroaches on a validator's agency to price its compute as it chooses

Such a scheme could already be implemented at the scheduler level. Individual validators intentionally have agency to price transactions as they choose, which allows them to outcompete potato validators, create large blocks, and ultimately keep fees lower for users. Not only will this scheme make fees unpredictable for users, it will also make them higher. If this truly is the best scheme, validators that use it at the scheduler level will be able to outcompete validators that don't.

Oracles, crankers, and market-makers are just 3 of the many categories of users who will be affected by this change. Raising fees for these players, of course, just ends up as worse UX / higher spreads for retail!

@apfitzge
Contributor

@eugene-chen

to everyone saying "real priority fees have never been tried", can you link to a spec for what the behavior is supposed to be and how that improves the UX? pardon my ignorance but i don't think i've ever seen a description of what the intended behavior of this system is

even a perfect first-price gas auction (which is not achievable with continuous block building) is worse than a protocol-enforced min fee, in terms of UX

  • priority-fees are a tip to the leader to incentivize them to include your transaction
  • leaders acting in their own economic self-interest would include higher-priority transactions, because they will lead to increased profit.

There's no spec on behavior because the leader is free to do whatever they want with them. Can only speak for myself, but I think priority-fees have not really been given a chance for a few reasons:

  1. there's only 1 client right now on mnb, the labs client
  2. the labs client's implementation of banking stage does not act in economic self-interest.

expanding on 2 - the current (soon to be legacy 🙏 ) implementation has nearly independent threads, all of which race to take account-locks. This leads to lower-priority transactions getting executed before higher-priority ones, even when they are competing.
This has been widely observed and known in the community. I think because of this non-reliability devs/users have not, until recently, used priority fees widely. Even with recent adoption, because of this race for locks, we can often see unprioritized or low-priority arb getting included into the block, even when there were higher-priority "productive" txs that requested access to that state. This encourages people to spam these arb txs, because even if the probability they get executed is relatively low there's still a chance they get lucky and a thread grabs their locks first so they get first access.

There are major changes coming in 1.18 which makes the leader respect priority much better, which hopefully will discourage spammed low-priority arb transactions - since they will have a significantly decreased probability of success.


I'm not totally against this proposal, I'm happy we're seeing some alternative ideas proposed. But I do think we need to consider the context of the current implementation with respect to already implemented economic models. I think it's prudent to hold off on committing to a total economic overhaul, when there are massive improvements in the release pipeline which may already lead to resolutions to the problems we see.

@benedictbrady

This is an elegant proposal that addresses many of the UX issues on Solana

If a leader observes a transaction that write-locks certain accounts, it is very hard for them to know whether they should include this transaction immediately or wait until they see another transaction write-locking similar accounts that will pay a higher fee. The best way to make this determination would be to look over the previous N blocks and see if it is higher or lower than the average fee paid to write-lock those accounts.

The main issues with the current solana fee markets are

  • The scheduler is not implementing any algorithm like this (fixed in this proposal)
  • For UX all the validators should implement roughly the same cutoff (fixed in this proposal)
  • The validators should communicate this to the users so that they know how much to pay to get included in the next block (can be fixed by this proposal)

In protocol fee market floors that target 50ish% block usage are good because you are very clearly communicating with the user something like "if you pay this much we will basically try to include you immediately otherwise we will drop you on the floor"

At the end of the day if you implement good surge pricing you can also drop fees for other less contentious operations.

The reason this helps with spam is that spam exists to trick bad scheduler implementations that do not look back and accurately estimate the opportunity cost of including a dumb cheap transaction that write-locks every defi account. You are now pricing this opportunity cost for them and also telling the user how you are valuing it.

@jacobcreech jacobcreech changed the title Exponential fee for write lock accounts SIMD-110: Exponential fee for write lock accounts Jan 23, 2024
@y2kappa

y2kappa commented Jan 23, 2024

"if you pay this much we will basically try to include you immediately otherwise we will drop you on the floor"

This promise is no better than the current promise of inclusion. Validators cannot make such promises anyway, because everyone will have the same information (the floor price being the EMA), so everyone will start bribing from a higher floor. How does that improve the probability of inclusion if everyone just bids higher?

Since everyone will start at a higher price it will be equally contentious so UX will not improve in any way.

This proposal does nothing but raise the price at which spam happens; the spam will keep happening. There is really no gain in terms of deterring spam.

@eugene-chen
Unfortunately, the best priority fee is a function of exogenous variables independent of the current account's historical locking pattern, and no pricing mechanism will change that. It's an unsolvable problem at the validator level. Even if the actual model is a first-price auction and you suggest a higher starting price, it will just raise everyone's starting values by the same amount; you still don't know what the outcome will be.

At most, this just makes things more expensive instead of letting the market decide that.

@aeyakovenko

aeyakovenko commented Jan 24, 2024

@y2kappa priority fees alone can't work.

If a spammer has a 1/100 probability of inclusion on a tx with an ROI of $0.01, they will send 100 txs: 99 get dropped, 1 gets included and pays a fee. The leader has to deal with 99 of the txs. If the account write-lock fee is > $0.01, the spammer won't send the tx, or their txs will get dropped really quickly because the fee payer can't afford the fee, which is a low-resource check.

The 99 txs that are in the pipeline increase the work that all leaders need to do to land successful txs. This proposal addresses the steady state spam that leaders see without increasing costs for all users.

Since everyone will start at a higher price it will be equally contentious so UX will not improve in any way.

This is wrong. Only contentious accounts will see a fee increase; globally there is no increase. So general users will not be impacted.
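The expected-value arithmetic in this argument can be sketched in a few lines (the numbers are the illustrative ones from the comment; `spam_ev` is a hypothetical helper, not protocol code):

```python
# Toy expected-value model of the spam argument above (illustrative numbers).
# Only landed txs pay fees; dropped copies cost the spammer nothing but
# still burden the leader's pipeline.

def spam_ev(roi_per_landed_tx, p_inclusion, write_lock_fee, copies):
    # Expected number of copies that land, times profit per landed copy.
    expected_landed = p_inclusion * copies
    return expected_landed * (roi_per_landed_tx - write_lock_fee)

# 100 copies, 1/100 inclusion, $0.01 ROI, no write-lock fee: spamming is +EV.
assert spam_ev(0.01, 1 / 100, 0.00, 100) > 0
# A write-lock fee above the per-tx ROI makes every landed copy unprofitable,
# so the rational spammer stops sending.
assert spam_ev(0.01, 1 / 100, 0.02, 100) < 0
```

The leader still had to process the 99 dropped copies in the first scenario; the fee check in the second scenario rejects them cheaply before they reach the scheduler.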

@apfitzge
Contributor

apfitzge commented Jan 24, 2024

if a spammer has a 1/100 probability of inclusion that has an ROI of $0.01, they will send 100 txs, 99 of them get dropped, 1 gets included and pays a fee. Leader has to deal with 99 of the txs. if the account write lock fee > $0.01, spammer wont send the tx, or they will get dropped really quickly because the fee payer can't afford the fee, which is a low resource check.

@aeyakovenko if there is such a low probability of inclusion then it seems this proposal primarily affects "legitimate" users. It seems this just raises the floor-price for arbitrage txs to be worth it.

@aeyakovenko

aeyakovenko commented Jan 24, 2024

@apfitzge cost has to increase above the cost of spam, and if there are users below that cost they will be affected, no way around it. But those users are already affected because spam increases the load, so txs take longer to get to the scheduler, priority fees are higher, etc...

Economically it should be the same for legit users. The spammer is willing to pay X total fee; as the base fee rises, they are still willing to pay up to X total, so their priority fee drops. Legit users not accessing that account who pay X now get relatively higher priority than spammers.

@SOELTH

SOELTH commented Jan 24, 2024

Legit users not accessing that account that pay X now get relatively higher priority than spammers.

It would only be relatively higher due to issues with scheduler/quic in the first place. If we eliminate quic/scheduler jitter (which we will eventually anyways), the value of spam greatly diminishes.

This would still raise the base fee for normal users -- nevermind oracles, market makers, etc.

@aeyakovenko

@tao-stones worth running some data on a 1.01% increase with a 1.015% or 1.02% decrease per block, i.e. a faster decay
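For a rough sense of what such rates do over a 150-slot blockhash lifetime (a back-of-the-envelope check, not part of the spec): 1% growth per slot compounds to about 4.45x, matching the worst-case estimate discussed earlier in the thread, while 1.5-2% decay per block drains the fee to a few percent:

```python
# Compounding of a per-slot rate change over a 150-slot blockhash lifetime.
def compound(rate_per_slot, slots=150):
    return (1 + rate_per_slot) ** slots

assert 4.4 < compound(0.01) < 4.5   # +1%/slot: ~4.45x worst-case growth
assert compound(-0.015) < 0.11      # -1.5%/slot: fee decays to ~10%
assert compound(-0.02) < 0.05       # -2%/slot: fee decays to ~5%
```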

@brianlong

brianlong commented Jan 24, 2024

Re-calibrating my understanding...

Simplified pipeline:

    [sig verify] => [dedup] => [fee check] => [etc...] => [scheduler]
         ^                                                     ^
         |                                                     |
    congestion here --------------------------- causes trouble here
  • Congestion early in the pipeline causes starvation problems later
  • The SIMD proposal is to dynamically raise the Base Fee for hot accounts so TXs can be filtered in the Fee-Check stage
  • Major Point: market-based priority fees CANNOT be used alone, because the goal is to reject TXs earlier in the pipeline

@brianlong

brianlong commented Jan 24, 2024

Other thoughts to consider:

  • The Base Fee represents the cost of inclusion. Base Fee determined algorithmically.
  • Priority fee represents ordering within the block. Priority fee determined by market forces.
  • Devs will adapt by sharding hot accounts (already happening)
  • What happens if the number of hot accounts increases to the point they consume all the block space?
  • Can other TXs optionally pay a higher Base Fee for inclusion? (or simply pay a higher Priority Fee?)

Edited: The fee does not guarantee inclusion. It merely guarantees that the TX is not dropped at the fee gate.

@aeyakovenko

@SOELTH This would still raise the base fee for normal users -- nevermind oracles, market makers, etc.

That cost exists for the network though, and normal users already pay it. Leaders ingress 50k-100k txs per second atm, but only 500-1000 land in the block. This translates into poor priority pricing and confirmation delays for the user. Given that more than 50% of current CUs fail, it is safe to set the write-lock base price to target 50% load. If that drops the ingress rate to 5k, users get much faster confirmations and much faster inclusion, and therefore much better ROI per priority fee.

@tao-stones
Contributor Author

The SIMD proposal is to dynamically raise the Base Fee for hot accounts so TXs can be filtered in the Fee-Check stage

Nit: calling it "Base Fee" could be confusing; the Base Fee is static and 50% burnt, while the proposed write-lock fee is dynamic and 100% burnt.

What happens if the number of hot accounts increases to the point they consume all the block space?
Can other TXs optionally pay a higher Base Fee for inclusion? (or simply pay a higher Priority Fee?)

In the current version, where priority is ordered solely by cu_price, it doesn't matter whether accounts are hot.

If priority is instead based on (total_fee / total_cu), then hot accounts will push up even a Transfer transaction's priority fee if it wants to land without touching any hot accounts.

In both cases, a high enough priority fee can outbid a tx accessing the cheapest hot account.
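A sketch of the two orderings being compared (function names and numbers are hypothetical, not the runtime's API):

```python
# Compare priority solely by cu_price vs. by effective rate
# (total_fee / total_cu), where total_fee folds in a write-lock fee.

def priority_by_cu_price(cu_price, **_):
    return cu_price

def priority_by_total_rate(cu_price, cus, write_lock_fee):
    total_fee = cu_price * cus + write_lock_fee
    return total_fee / cus

# A simple transfer touching no hot accounts...
transfer = dict(cu_price=100, cus=150, write_lock_fee=0)
# ...vs a tx write-locking a hot account that carries a burned write-lock fee.
hot_tx = dict(cu_price=100, cus=200_000, write_lock_fee=50_000_000)

# Under cu_price ordering, the write-lock fee doesn't change priority at all:
assert priority_by_cu_price(**transfer) == priority_by_cu_price(**hot_tx)
# Under total-rate ordering, the hot tx's burned fee inflates its rate, so
# the plain transfer must raise its cu_price to keep the same relative rank.
assert priority_by_total_rate(**hot_tx) > priority_by_total_rate(**transfer)
```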

@aeyakovenko

What happens if the number of hot accounts increases to the point they consume all the block space?
Can other TXs optionally pay a higher Base Fee for inclusion? (or simply pay a higher Priority Fee?)

Overall cost of inclusion should be lower for non-congested accounts. Validators get 100% of the priority fee, while the write-lock fee is 100% burned, so there isn't an incentive to include any hot-account txs unless they also outbid on priority fees. So total costs for hot accounts are going to be higher.

@SOELTH

SOELTH commented Jan 25, 2024

@SOELTH if i understand correctly your argument boils down to 'let validators decide which scheduler they want; don't enforce changes to the fee market within consensus'. This greatly overestimates most validators' ability (and willingness) to modify their own clients. Running a validator is complex enough as it is; i doubt validators would want to risk having their blocks skipped. Fixing the scheduler & the network layer needs to be done anyways; but so long as jitter is nonzero there will always be incentive to spam.

Validators don't have to modify their own clients. Labs/Firedancer will both make better implementations of the scheduler. If they want to modify their own clients though, they are absolutely free to. The protocol should be kept as minimally restrictive as reasonably possible.

The crux of my argument is that the aforementioned problems can be solved by implementation details and don't require adding complexity to the protocol.

@aeyakovenko

@SOELTH

The crux of my argument is that the aforementioned problems can be solved by implementation details and don't require adding complexity to the protocol.

I don't think there is any evidence to support this argument.

  • 100kpps ingress caused by a few hot accounts is a negative externality that impacts all users. There is only so much we can optimize while supporting a wide range of hardware.

  • 12M CU saturated accounts cause replay spikes across a wide range of hardware that impact all
    users.

Either everyone is way over provisioned, or the system has economic back pressure

@SOELTH

SOELTH commented Jan 26, 2024

@SOELTH

The crux of my argument is that the aforementioned problems can be solved by implementation details and don't require adding complexity to the protocol.

I don't think there is any evidence to support this argument.

  • 100kpps ingress caused by a few hot accounts is a negative externality that impacts all users. There is just so much we optimize while supporting a wide range of hardware.

Dropping spam packets is trivial -- and a protocol change won't stop invalid spam. The leader can quickly identify whether a packet is valid and high-priority or invalid/low-priority.

Regardless, spam won't be profitable or +EV if we reduce jitter

  • 12M CU saturated accounts cause replay spikes across a wide range of hardware that impact all
    users.

If too many nodes can't handle replay, that is a problem with block limits

Either everyone is way over provisioned, or the system has economic back pressure

The network never has enough capacity for everyone. This is why priority fees exist in the first place

@aeyakovenko

@SOELTH

Push changes that improve confirmation times for non-congested txs. They used to be 1s on the same hardware as today. What's changed is that there is 100kpps of spam filling up all the pipelines. If you think it's easy to fix, by all means, show us.

@SOELTH

SOELTH commented Jan 26, 2024

@SOELTH

Push changes that improves confirmation times for non congested txs. They used to be 1s on the same hardware as today. What's changed is that there is 100kpps of spam filling up all the pipelines. If you think it's easy to fix, by all means, show us.

Happy to help outline proper fixes!

Some things that will help

  • Improved scheduler (both Andrew's and FD)
  • Eliminating QUIC or at least nixxing quinn
  • Relaxing tx constraints so it's easier to filter from invalid spam

I've been meaning to push some analytics and writeups for a while -- but I'm one person with a full-time job doing this completely for free. I don't have any $100m grants, so forgive me if I don't have anything pushed tomorrow 😄

@aeyakovenko

aeyakovenko commented Jan 26, 2024

@SOELTH

Andrew and Richard are already looking at scheduler and quic improvements. They shouldn't block experimenting with write lock fees.

A year ago the implementation was worse but had 1.4s P90 confirmation times on the same hardware. The difference is that leaders had 5k pps of traffic to deal with. We can a/b test all the changes and see whether write lock fees work or not at reducing the load selectively.

@SOELTH

SOELTH commented Jan 26, 2024

@SOELTH

Andrew and Richard are looking already at scheduler and quic improvements. They shouldn't block experimenting with write lock fees.

I'm not saying this is a blocker at all. I'm saying the aforementioned problems are largely solved by scheduler and quic improvements.

I'm not against experimenting with write lock fees either (I do think write locks are extremely mispriced as it costs nothing extra to lock a +1 account).

I do think however there's a balance between planning fees at the protocol level vs pricing transactions at the scheduler level. Centrally-planned EMA fees will hinder fast blockbuilders' ability to keep fees low and outcompete potato hardware/algos.

A year ago the implementation was worse but had 1.4s P90 confirmation times on the same hardware. The difference is that leaders had 5k pps of traffic to deal with. We can a/b test all the changes and see if write lock fees work or not and reducing the load selectively.

I remember, and I'm happy we can be talking about such a scale of throughput problems today! I think we just disagree on the solution.

@aeyakovenko

aeyakovenko commented Jan 26, 2024

I do think however there's a balance between planning fees at the protocol level vs pricing transactions at the scheduler level. Centrally-planned EMA fees will hinder fast blockbuilders' ability to keep fees low and outcompete potato hardware/algos.

maybe, or the write lock fee will increase until the pipeline is no longer saturated, priority fees will take over, and users will get more effective prioritization per lamport because all the pipelines leading up to the scheduler aren't at the red line.

I'm not saying this is a blocker at all. I'm saying the aforementioned problems are largely solved by scheduler and quic improvements.

I don't have any evidence to suggest that it's true; so far the data is showing me the contrary. Like, should we limit ingress bandwidth to a super conservative 20kpps or something?
In the same time slice, spam shoots up and confirmation times increase. All the spam is targeting the same jup wen airdrop accounts. With the current design the write lock fee would go up until spammers back off, and prioritization would take over

[Screenshots: spam volume and confirmation-time metrics, Jan 26 2024]

@SOELTH

SOELTH commented Jan 26, 2024

I do think however there's a balance between planning fees at the protocol level vs pricing transactions at the scheduler level. Centrally-planned EMA fees will hinder fast blockbuilders' ability to keep fees low and outcompete potato hardware/algos.

maybe, or the write lock fee will increase until the pipeline is no longer saturated and priority fees will take over, and users will get more effective prioritization per lamport because all the pipelines leading up to the scheduler aren't at the red line.

As a validator I want to be able to price my own compute. If I can't price my own compute there's incentive to fork the chain and outcompete with a better blockpacking algo -- a better algo that can handle the throughput at line rate. I'm not saying this is an easy problem, but I am absolutely saying we can do it.

I'm not saying this is a blocker at all. I'm saying the aforementioned problems are largely solved by scheduler and quic improvements.

i don't have any evidence to suggest that its true. so far data is showing me the contrary. Like should we limit the ingress bandwidth to a super conservative 20kpps or something? same time slice, spam shoots up, confirmation times increase. All the spam is targeting the same jup wen airdrop accounts. With the current design the write lock fee would go up until spammers back off and prioritization will take over

I argue that metrics degrading under high kpps are a symptom of a poor networking stack and a poor scheduler. Instead of attempting to artificially limit kpps with centrally planned fees (which also doesn't solve invalid spam), I propose we move to handle pps at line rate. If a single IP is spamming too many handshakes, we can easily cut them off.

Confirmation times with the current scheduler/quic will absolutely drop with pps. I don't disagree with you there!

@aeyakovenko

@SOELTH

I propose we move to handle pps at line rate. If a single IP is spamming too many handshakes, we can easily cut them off.

Andrew and Pankaj are working on it. If the implementation proves to be sufficient then exponential write lock fees can be turned off.

Gotta fix the fires in front of us.

@SOELTH

SOELTH commented Jan 26, 2024

@SOELTH

I propose we move to handle pps at line rate. If a single IP is spamming too many handshakes, we can easily cut them off.

Andrew and Pankaj are working on it. If the implementation proves to be sufficient then exponential write lock fees can be turned off.

Gotta fix the fires in front of us.

If we want to test the strategy, is the scheduler level not the best place to do it? IMO validators should be able to price their own compute. Rather than adding protocol complexity/constraints, we could spin up some clients on mainnet/testnet with the new strategy implemented at the scheduler level.

If we do it this way we also don't need a feature gate every time we want to make an adjustment

@tao-stones
Contributor Author

Validators should be able to price their own compute.

Wouldn't this make UX worse? Instead of pricing based on cluster status, users would have to price based on the leader that packs their transactions?

@apfitzge
Contributor

Validators should be able to price their own compute.

Wouldn't this make UX worse? Instead of pricing based on cluster status, users have to price based on leader that packs their transactions?

But that doesn't really change in this proposal; this just sets a floor on top of which users still need to compete in priority-fee markets. Their total fee would be (EMA fee(s) + priority fee to get into the block), and if they are unwilling to raise their total fee they are less likely to get into the block, because the leader has no incentive to pack these high-EMA-fee txs if they don't also have high priority

@SOELTH

SOELTH commented Jan 29, 2024

Validators should be able to price their own compute.

Wouldn't this make UX worse? Instead of pricing based on cluster status, users have to price based on leader that packs their transactions?

No, even a naive client could use the median fee and have a phenomenal chance of getting through. The chance of getting dropped would fall off geometrically over N leaders. During times of high volatility / high fees, a slightly less naive client could narrow the sample size for the median or weight recent leaders higher.

More sophisticated clients can use their own strategies and track individual validators.

Free market.
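The geometric falloff can be sketched under the simplifying assumption (purely illustrative) that each leader independently accepts a median-fee tx with probability ~0.5:

```python
# Probability a tx is still unlanded after n leaders, if each leader
# independently accepts it with probability p (geometric falloff).
def p_still_pending(n_leaders, p_accept):
    return (1 - p_accept) ** n_leaders

# Paying roughly the median leader's cutoff => about half of leaders accept,
# so the tx has a >93% chance of landing within 4 leader rotations.
assert p_still_pending(4, 0.5) == 0.0625
```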

@aeyakovenko

Some research https://x.com/umbraresearch/status/1752361516572016925?s=46

@aeyakovenko

@SOELTH

Chances getting dropped would fall on a geometric distribution over N leaders.

The user has to sign the max they are willing to pay, and pay it, vs. the user signs a tx that pays up to the max if the write-lock fee scales all the way there, but only pays the current offer price. I'd be really surprised if your proposal doesn't result in either users overpaying above the floor price or more timeouts and re-signing.


## Security Considerations

none
Contributor


Can potentially slow the leader by spamming it with transactions using 256 accounts in an ALT, all write-locked.

Contributor Author


Good point, though this isn't unique to this proposal. The ALT needs to be loaded anyway; the proposal adds an LRU cache lookup



The max account locks is only 64. The scheduler should factor locks into the equation, however



Feature activation pending right?
9LZdXeKGeBV6hRLdxS1rHbHoEUsKqesCC2ZAPTPKJAbK | inactive | NA | increase tx account lock limit to 128 #27241

Contributor Author


According to feature schedule, it is blocked atm.

@SOELTH

SOELTH commented Jan 31, 2024

@SOELTH

Chances getting dropped would fall on a geometric distribution over N leaders.

User has to sign max they are willing to pay, and pay it. vs user signs a tx that pays up to the max if the write lock fee scales all the way there, but only pays what is the current offer price.

This just shifts the pricing burden from the leader to the protocol and stifles validator commission, though -- which will stifle competition between validators and reduce a smart blockbuilder's ability to keep fees low.

I'd be really surprised if your proposal doesn't result in either users overpaying above the floor price or more time outs and resigning.
This proposal forces overpaying for oracles/MMs that need to run 24/7 -- yet more costs that will get passed on to users. Tracking the right fee to pay is a client-side problem, and for your average user a naïve strategy would no doubt do quite well.

Let validators price their own compute or they will fork the chain and use a less naive pricing method

## New Terminology

- *compute-unit utilization*: denominated in `cu`, it represents total
compute-units applied to a given resource.


Does the utilization count both read and write transactions, or just write transactions?

Contributor Author


Just write locks in this proposal. It is possible to apply a similar fee to read locks, but we'd rather do that after seeing the write-lock fee succeed.


@jarry-xiao jarry-xiao left a comment


At a high level, I think this proposal reasonably addresses the inevitable problem that all supply-constrained systems face. When blockspace demand drastically overpowers supply, the market price needs to adjust to a stable equilibrium. I agree with the need for a controller mechanism because it will positively impact downstream UX related to transaction inclusion (for both developers and application users).

A few notes:

  • This is an implementation detail, but the shape of the controller is likely pretty important if this is to be a breaking change on the protocol level. It's not clear that exponential is the right way to go.
  • IMO any protocol-level change is riskier than a client-specific implementation fix, so thinking through how changes get applied is a very practical step before jumping the gun. Sequencing-wise, I think it makes sense to enable this particular change after @apfitzge introduces key client-level infrastructure changes in 1.18.

@SOELTH

SOELTH commented Feb 5, 2024

At a high level, I think this proposal reasonably addresses the inevitable problem that all supply-constrained systems face. When blockspace demand drastically overpowers supply, the market price needs to adjust to a stable equilibrium. I agree with the need for a controller mechanism because it will positively impact downstream UX related to transaction inclusion (for both developers and application users).

A few notes:

  • This is an implementation detail, but the shape of the controller is likely pretty important if this is to be a breaking change on the protocol level. It's not clear that exponential is the right way to go.

This is why it'd be great to leave this up to the implementation of individual schedulers, and instead work on a better vehicle for communication between validators/clients on fees, plus better client-side fee-pricing models (once priority fees actually work, that is!). This will still result in good client UX while allowing validators to compete on fees and quickly test different fee models.

  • IMO any protocol level change is riskier than a client specific implementation fix, so I think how changes get applied is a very practical thing to do before jumping the gun. Sequencing wise, I think it makes sense to enable this particular change after @apfitzge's introduces key client level infrastructure changes in 1.18.

The scheduler will help a ton, but we also need to eliminate quinn, or I feel people will still spam due to network jitter.

Eliminating jitter is uncontroversial and will also abolish the need to spam for the most part. I'd have a hard time seeing the need for this proposal or anything like it if scheduler was fixed and quinn was eliminated.

tldr and to stay on topic: I think resources are better allocated to solving these engineering problems instead of taking an ETH-centric "solve it with more fees" approach.

…espond account saturation events quick enough. Changed to 8
… decreasing un-write-locked account per block, and eventually evict from cache when cost rate is zero
@tao-stones
Contributor Author

To share some data from replaying a mainnet-beta ledger during the JUP drop (about 9700 slots):

  1. Saturated account's EMA and cost-rate:
  [chart] Using 8 slots for the EMA and 25% as the target utilization, the EMA very closely follows per-block CU utilization. The saturated account's fee-rate follows this pattern: as soon as the account is "hot" (i.e. exceeds target utilization), the fee-rate jumps from 0 to 1 millilamport-per-cu and keeps rising by 1% per slot while the account stays hot; otherwise it gradually cools off to zero and is then evicted from the cache. Repeat.
  2. Write-lock-fee impact:
  [chart] Assume spammers have a price `x` in mind to stay positive-ROI; today they set their priority fee to `x`. If the write-lock fee is `>= x`, the spammer would not send the tx. Under that assumption, about 5% of txs in the ledger would have been excluded by the write-lock fee, the majority of them failed txs. This 5% only applies to txs already in blocks (a limitation of backtesting against a ledger); it is safe to assume a much higher number in banking_stage.
  3. Fees collected and distributed:
  [chart] The total write-lock fees collected in this test are less than signature fees. Rewards went up because 100% of priority fees are rewarded.
  4. Failed transactions' write-lock distribution:
  [chart] Most write-locks taken by failed transactions are on non-saturated accounts; however, during congestion failed transactions lock more saturated accounts, as expected. This proposal aims to reduce the number of locks taken on saturated accounts by failed transactions.
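A minimal sketch of the controller behavior described above (the 8-slot EMA, a 25% utilization target, a fee-rate that jumps to 1 millilamport/cu and grows 1%/slot while hot; the exact jump/decay rules and constants here are illustrative assumptions, not the SIMD's normative spec):

```python
# Illustrative per-account fee controller: EMA of per-block CU utilization
# driving a fee-rate (millilamports/cu) that grows while the account is hot.

ACCOUNT_MAX_CUS = 12_000_000          # per-account block CU limit (assumed)
TARGET_CUS = 0.25 * ACCOUNT_MAX_CUS   # 25% target utilization
ALPHA = 2 / (8 + 1)                   # standard smoothing for an 8-slot EMA

def step(ema, fee_rate, block_cus):
    ema = ALPHA * block_cus + (1 - ALPHA) * ema
    if ema > TARGET_CUS:
        # Account is "hot": fee-rate jumps to 1, then compounds +1%/slot.
        fee_rate = max(fee_rate, 1.0) * 1.01
    else:
        # Cooling off: decay toward zero, at which point the cache entry
        # would be evicted (this decay rule is an assumption).
        fee_rate *= 0.99
    return ema, fee_rate

ema, fee_rate = 0.0, 0.0
for _ in range(20):                   # 20 consecutive saturated blocks
    ema, fee_rate = step(ema, fee_rate, ACCOUNT_MAX_CUS)
assert fee_rate > 1.0                 # fee has compounded above the base rate
```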

@sakridge sakridge added the core Standard SIMD with type Core label Feb 9, 2024
@CantelopePeel

The Firedancer team (largely myself) is spending a large amount of time trying to do a root cause analysis of why the client cannot properly price inbound transactions and why users are unable to land transactions during times of congestion.

Our preliminary findings seem to indicate that a pile-up of transactions at the RPC/TPU layer can considerably slow the validator down.

We are also trying to substantiate the facts about Solana's fee market and the microstructure therein. We believe there are several issues with Solana's fee model and networking layer and block production which require different solutions to address both spam and priority fee based inclusion. This is time consuming so please be patient.

This proposal aims to solve the issue of inclusion by imposing a tax (and I mean this in purely economical terms, this is not a dig) on end users for high consistent resource consumption of a write lock account. This excise is burned, reducing supply marginally (2-3 OoM lower than inflation).

What is clear and evident is that this proposal does not address a few key concerns:

  • This does not prevent spam, rather it allows the block producer to create blocks which, under high account contention, can enable a tax on end users. The block producer has to decide to build a block which exceeds that threshold, which they may not do if the EMA window is long enough.
  • This proposal does not fix the root cause of the underlying issue of handling spam at the networking layer; it merely taxes users for capacity. If the validator cannot filter out mispriced transactions at line rate, that is the validator's issue, not her users' issue.
  • This proposal does not solve supply-side competition (aka competition between validators and the block producer) for blockspace on Solana, which we view as one of the principal issues causing mispricing of blockspace on Solana today.
  • Burning this write lock fee is not profitable to validators. They (and users) are better off taking a side payment and reducing supply to half (or some equilibrium near that setpoint).

Given these issues, it is unclear precisely how this proposal aims to solve the issues of spam if these issues cannot be addressed.

This proposal does get a few things right:

  • a controller mechanism is necessary, but we must be careful to not allow a fee cartel (again, an academic term) to form (this is solvable).
  • there are many resources at play to consider in each block in order to price it out.

The issue of spam cannot be solved entirely in protocol and we should endeavour to find the right solution, not the one which we will have to undo later. I would like to not have to think about this for another decade if I could, so we will find that solution. The more time we spend now arriving at the correct specification, the less time we spend on implementation or undoing things later.

Again, this proposal is not faulty; the premise is good, but the economics most likely will not have the desired effect (due to the aforementioned issues showing the proposal does not address the root of the matter directly). It will require another iteration and some analysis from our end.

We will publish a report based on our findings and counterproposal to address these issues after mtnDAO. I would encourage anyone who has any feedback or questions about this reach out to my various socials (by the same handle). We hope this proposal will remain simple and provide a comprehensive specification to improve inclusion (aka UX) on Solana.

- worst case per block: 128 * 8 = 1024;
- 2 times worst case: 2048;
- Fee Handling:
- Collected write-lock fees are 100% burnt.


Why aren't read locks considered here?

Contributor Author

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

The Other Considerations section acknowledges read-lock contention and the possibility of applying the same EMA mechanism. We prefer doing that after seeing success on write locks first.
